Communication Systems

Contents:

- Introduction to Communication Systems: Introduction; Communication Channel; Noise in Communications; Transmitter and Receiver; Digital vs Analog Communication; Objectives of System Design; Communication networks
- Probability and Random Processes: Random Variables; CDF and PDF; Mean and Variance; Probability Distributions (Normal/Gaussian Distribution, Uniform Distribution); Joint Distribution; Joint Distribution of RVs; Random Processes; Statistics of a Random Process; Stationary Random Processes; Properties of the Autocorrelation function; Ergodic Processes; Power Spectral Density (PSD); Passing through a linear system
- Baseband and Passband signals: Energy and Power; Bandwidth; Channels; Baseband Signals; Passband Signals; Modulation: Baseband to Passband; Downconversion: Passband to Baseband; Complex Envelope; Hilbert Transform; Pre-Envelope; Complex Envelope
- Noise: White Noise; White Noise vs Gaussian Noise; Ideal Low Pass White Noise; Band Pass Noise; Properties of Baseband Noise (Power Spectral Density, Noise Power); Phasor Representation; Distribution of Envelope and Phase
- Noise Performance of DSB: Signal to Noise Ratio; Transmitted Power; Baseband Communication System; Double Sideband-Suppressed Carrier (DSB-SC) Modulation; Synchronous Detection for DSB-SC
- Noise Performance of SSB and conventional AM: Single Sideband Signals; Noise in SSB; Output SNR; Standard AM: Synchronous Detection; Output SNR; Comparison; Non-coherent Receiver and envelope Detection; Envelope detection circuit; Envelope Detection for Standard AM; Small Noise Case; Large Noise Case; Large Noise Case: Threshold Effect; Summary
- Frequency Modulation: FM revision; FM Basics; FM Receiver; High SNR properties; Phase noise in high SNR; Noise PSD; Noise Power; Output SNR; Bandwidth-SNR tradeoff
- Pre/de-emphasis for FM and comparison of Analog systems: Threshold Effect; Improving output SNR; Pre-emphasis and de-emphasis; Improvement Factor; Comparison of Analog Systems
- Digital Representation of Signals: Sampling and Quantization; Quantizers; SNR; Pulse Coded Modulation (PCM); Uniform Quantization Problems; Solution: Nonuniform Quantization; Solution: Companding; Compander Standards: μ-law vs A-law; Line Coding; Desired Properties of a Line code; Types of Line codes
- Baseband Digital Transmission: Matched Filter; Derivation; Properties; Binary Baseband Communication System; Noise Distribution; Decision; Errors; Derivation; Optimum Threshold; Calculation of Q-function boundaries
- Digital Modulation: Digital Passband Modulation; Demodulation; ASK Coherent Demodulation; Bit Error Rate (BER); PSK Analysis; Bit Error Rate; Frequency Shift Keying; Bit Error Rate for FSK; Sum of two RVs; Non coherent Demodulation; ASK non coherent demodulation; Rayleigh Distribution; Derivation; Rician Distribution; Derivation; Error Probability; Non coherent demodulation of FSK; Differential PSK; Differential Demodulation; Summary and Comparison
- Entropy and Data Compression: Source Coding Theory; Average Codeword Length; Minimum Codeword Length; The general case; Source Coding Theorem; Huffman Coding
- Channel Capacity: Introduction; Conditional Entropy; Mutual Information; Channel Coding Theorem; Binary Symmetric Channel; Additive White Gaussian Noise (AWGN) channel; Shannon Limit
- Noise and Errors: Channel Model; Block Codes; Linear Block Codes; Generator Matrix; Hamming Distance; Error Detection and Correction; Error Detection; Error Correction

Introduction to Communication Systems

Introduction

There are four basic elements of communication:

Block diagram of a communication system.

Communication Channel

The medium through which the information travels is called the communication channel. It can be anything from a cable to optical fibre to air to hard disks.

The channel introduces propagation loss: the signal strength decays with distance.

The channel has a bandwidth. The bandwidth is the range of frequencies that can be used for communication. More bandwidth means higher transmission capacity.

The channel can have time variation. This is where the channel characteristics change over time. An example of this is mobile radio channels.

The channel can experience nonlinearity, which some elements such as repeaters introduce.

The channel introduces and is prone to noise.

Noise in Communications

Unwanted signals present in communication systems are known as noise. Noise can be external or internal.

External noise is usually interference from nearby channels, human made noise, or natural noise.

Internal noise can be caused by thermal noise or the random motion of electrons amongst other things.

Noise limits the performance of communication systems. A widely used metric for this is the signal to noise ratio (SNR), the ratio of average signal power to average noise power, usually quoted in decibels: $\mathrm{SNR} = P_{\text{signal}} / P_{\text{noise}}$
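As a quick numeric illustration (the power levels below are made up), the SNR in decibels can be computed as:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in decibels: 10*log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# Hypothetical example: 1 mW of signal power against 1 uW of noise power.
print(snr_db(1e-3, 1e-6))  # ~30 dB
```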

Transmitter and Receiver

The transmitter modifies the source signal into a form suitable for transmission over the channel. This involves modulation and up conversion.

Modulation is where some parameter of a carrier wave is varied based on the source signal.

Up conversion is where the modulated signal is converted into the final radio frequency (RF).

The receiver reconstructs the original message by down conversion and demodulation.

Digital vs Analog Communication

Natural signals are analog and are continuous in time and amplitude, whilst digital signals are discrete.

No matter what, the transmitted signal is always analog. Digital vs analog refers to how the parameters of these waveforms are formed.

Digital systems convert the source signal into a finite set of source messages and map them to a set of signals that are transmitted as analog waveforms along a channel. Digital communication is more efficient and reliable than analog, but its design is more sophisticated and complex.

Objectives of System Design

There are two primary resources in communications: the transmitted power and the channel bandwidth (which is very expensive). In certain scenarios, one resource may be more limited than the other. In space, power is more important than bandwidth whilst in a telephone, the opposite is the case.

The objectives of communication design are:

Communication networks

Today's communication networks are complicated systems with large numbers of users sharing the same medium.

Diagram of today's complicated communication systems.

These are made up of hosts, which communicate with each other and provide information to share, and routers, which route the data from the hosts through the network to other routers.

 

Probability and Random Processes

The sample space ($\Omega$) is the set of all possible outcomes.

An event is any subset of the sample space.

The probability $P(A)$ of an event $A$ is a non-negative number assigned to any subset $A$ of the sample space, with $P(\Omega) = 1$.

The probability of the union of two events that do not have any common outcome is the sum of the probabilities of the two events separately.

Random Variables

A random variable $X$ is a real-valued function defined on the set of all possible outcomes $\Omega$. $X$ assigns a number $X(\omega)$ to every outcome $\omega$. The event $\{X \le x\}$ is the subset of $\Omega$ consisting of all outcomes $\omega$ such that $X(\omega) \le x$.

must satisfy:

CDF and PDF

The cumulative distribution function (CDF) of a random variable $X$ is defined as the probability of $X$ being less than or equal to $x$: $F_X(x) = P(X \le x)$

The probability density function (PDF) is defined as the derivative of the cumulative distribution function: $f_X(x) = \dfrac{dF_X(x)}{dx}$

Mean and Variance

Mean is also known as the expected value or, in electronics, the DC level: $\mu_X = E[X] = \displaystyle\int_{-\infty}^{\infty} x f_X(x)\,dx$

Variance is also known as the power for zero-mean signals: $\sigma_X^2 = E[(X - \mu_X)^2] = E[X^2] - \mu_X^2$
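A small numeric sketch of these two statistics, estimated from samples (the distribution parameters below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=200_000)  # mean 2 ("DC level"), std 3

mean = x.mean()   # estimate of E[X]
var = x.var()     # estimate of E[(X - E[X])^2]
print(mean, var)  # close to 2 and 9
```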

Probability Distributions

Normal/Gaussian Distribution

The Normal/Gaussian Distribution.

For the normal distribution (with mean $\mu$ and variance $\sigma^2$): $f_X(x) = \dfrac{1}{\sqrt{2\pi}\,\sigma} e^{-(x-\mu)^2 / 2\sigma^2}$

Uniform Distribution

The Uniform Distribution.

For the uniform distribution (with endpoints $a$ and $b$): $f_X(x) = \dfrac{1}{b-a}$ for $a \le x \le b$, and $0$ otherwise.

Joint Distribution

The joint distribution function for two random variables and is:

The joint probability density function is:

The properties of the joint distribution are:

Independence implies uncorrelatedness; however, uncorrelatedness does not imply independence.

Only for the Gaussian distribution does uncorrelatedness imply independence.

Joint Distribution of RVs

Joint CDF:

Joint PDF:

Independent:

Random variables are independent and identically distributed (i.i.d.) if they are independent and have the same distribution.

The central limit theorem states that the normalised sum of $n$ i.i.d. random variables tends to a Gaussian distribution as $n$ tends to infinity.

This is why noise is usually Gaussian.
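The theorem is easy to check numerically; this sketch sums uniform random variables (the choice of distribution and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Sum of n i.i.d. uniform(-0.5, 0.5) variables, scaled to unit variance
# (the variance of one uniform(-0.5, 0.5) variable is 1/12).
n = 50
sums = rng.uniform(-0.5, 0.5, size=(100_000, n)).sum(axis=1) / np.sqrt(n / 12)

# The normalised sum behaves like a standard Gaussian:
print(sums.mean(), sums.std())        # ~0 and ~1
print(np.mean(np.abs(sums) < 1.96))   # ~0.95, as for N(0, 1)
```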

Random Processes

A random process is a time-varying function that assigns to each outcome a function of time for , where is the total observation interval.

For a fixed sample point , function versus time is called a sample function of the random process.

For fixed , a random process is a random variable.

If one scans all possible outcomes of the underlying random experiment, one obtains an ensemble of signals.

Noise can often be modelled as a Gaussian random process.

The sample of a random process at any point in time is a random variable; the process itself is a function of two variables, the outcome and time.

Statistics of a Random Process

For fixed $t$, the random process becomes a random variable:

The autocorrelation function measures the correlation between two samples of the process: $R_X(t_1, t_2) = E[X(t_1)X(t_2)]$

Stationary Random Processes

A process is $k$th-order strict sense stationary if, for any value of the time shift $\tau$, the joint distribution of $k$ samples is unchanged by the shift:

A process is strict sense stationary if the above holds for every order $k$ and any shift $\tau$.

A process stationary to a given order is also stationary to every lower order: the set of $k$th-order stationary processes is a subset of the $(k-1)$th-order set, so the first order set is larger than the second order set.

A random process is stationary to first order if the distribution function (and hence the density function) of $X(t)$ is invariant over time.

$X(t)$ is stationary to second order if the joint distribution of $X(t_1)$ and $X(t_2)$ depends only on the difference between $t_1$ and $t_2$.

If is stationary to second order then:

A random process is wide sense stationary if its mean is constant and its autocorrelation function depends only on the time difference $t_2 - t_1$:

In communications, noise and message signals are often modelled as stationary random processes.

Strict sense stationarity always implies wide-sense stationarity, however the converse is only true for Gaussian processes.

Properties of the Autocorrelation function

For a stationary process with autocorrelation function $R_X(\tau)$:

$R_X(\tau)$ tells how predictable $X(t+\tau)$ is based on $X(t)$.

Ergodic Processes

Expectations of a stochastic process are averages "across the process", also referred to as "ensemble averages".

Sometimes it is hard/impossible to observe all sample functions of a random process at a given time, while it may be possible to observe a single sample over a long period of time.

If the time average is equal to the ensemble average, then the process is said to be ergodic.

Suppose a process $X(t)$, with finite mean for all $t$, is passed through a stable system with impulse response $h(t)$:

If $X(t)$ is wide sense stationary then the output mean is: $\mu_Y = \mu_X H(0)$

Where $H(0)$ is the zero frequency (DC) response of the system.

If $E[X^2(t)]$ is finite for all $t$ and the system is stable:

If $X(t)$ is wide sense stationary then, letting $\tau = t_1 - t_2$:

If a wide-sense or strict-sense stationary process passes through an LTI system, the output remains stationary in the same sense.

Power Spectral Density (PSD)

PSD is a function that measures the distribution of power of a random process over its spectrum.

PSD is defined only for stationary processes.

The Einstein-Wiener-Khintchine relation states: the PSD of a wide sense stationary process is equal to the Fourier transform of its autocorrelation function: $S_X(f) = \displaystyle\int_{-\infty}^{\infty} R_X(\tau) e^{-j2\pi f\tau}\,d\tau$

The frequency content of a process depends on how rapidly the amplitude changes as a function of time, which is measured by the autocorrelation function.

The average power is: $P = R_X(0) = \displaystyle\int_{-\infty}^{\infty} S_X(f)\,df$

If $X(t)$ is a real WSS process then:

The power spectrum is an even function.
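One consequence worth verifying numerically is that the average power equals the area under the PSD. A minimal sketch using a periodogram estimate of the PSD (the white-noise test signal and lengths are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, fs = 1 << 16, 1.0
x = rng.normal(size=n)  # white Gaussian noise with unit variance

# Periodogram estimate of the two-sided PSD.
X = np.fft.fft(x)
psd = (np.abs(X) ** 2) / (n * fs)

# Area under the PSD equals the time-averaged power of x (Parseval).
power_from_psd = psd.sum() * fs / n
print(power_from_psd, np.mean(x ** 2))  # both ~1
```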

Passing through a linear system

If a WSS process goes through a linear system of transfer function $H(f)$: $S_Y(f) = |H(f)|^2 S_X(f)$

For a complex process going through a complex valued LTI system, it can be shown that:

If the input is a Gaussian process then the output is also a Gaussian process.

 

Baseband and Passband signals

Energy and Power

Energy of a signal $s(t)$ is: $E = \displaystyle\int_{-\infty}^{\infty} |s(t)|^2\,dt$

Power is the time average of energy, computed over a large interval:

For a sinusoid of amplitude $A$, the power is $A^2/2$.

Bandwidth

Bandwidth of a signal quantifies its frequency occupancy. One-sided bandwidth considers only positive frequencies when computing the bandwidth of physical signals, such as WiFi.

Physical signals are real-valued in the time domain, hence they are conjugate symmetric in the frequency domain, so they can be specified completely by their spectrum over positive frequencies.

Complex exponentials are used because they are eigenfunctions of LTI systems. Physical signals are real-valued in the time domain so they must satisfy conjugate symmetry meaning all the information resides in either positive or negative frequencies. Therefore physical bandwidth is one sided bandwidth.

Channels

Channels are often approximated as LTI systems: signals pass through the channel, and then noise is added. Channels are typically allocated and described in terms of frequency bands:

Baseband Signals

Baseband signals have energy/power concentrated in a band around DC:

Baseband signals.

A real baseband signal of bandwidth $W$ has conjugate symmetry: $S(-f) = S^*(f)$. Note that $|S(f)|$ is symmetric for a real baseband signal.

For communication over a physical baseband channel, physical (real-valued) baseband signals are considered. Whilst for communication over a physical passband channel (discussion coming up), complex-valued baseband signals which provide a convenient mathematical representation for the corresponding passband signals are considered.

Passband Signals

Passband signals have energy/power concentrated in a band away from DC:

Passband signals.

Only physical (real-valued) passband signals are considered, hence their spectra always obey conjugate symmetry.

Examples of baseband signals are: Speech and audio; Two-level digital signals.

Often such signals need to be sent over a passband channel (e.g., a 20 MHz WiFi channel at 2.4 GHz) therefore they need to be modulated.

Modulation: Baseband to Passband

Modulation translates a signal to passband by multiplying it by a sinusoid. For a real-valued baseband message signal $m(t)$:

Baseband to Passband signal modulation.

The carrier frequency should be larger than the message bandwidth to keep the passband signal away from DC.

The in-phase and quadrature components can be modulated separately using the cosine and sine of the carrier. The I and Q components are real baseband signals that contain all the information, whilst the sinusoids are rapidly varying but predictable, so they carry no information:

This all happens at the transmitter.
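The upconversion and coherent downconversion steps can be sketched end-to-end. The carrier and tone frequencies below are arbitrary, and the ideal low-pass filter is implemented crudely in the frequency domain:

```python
import numpy as np

fs, fc = 100_000.0, 10_000.0        # sample rate and carrier (hypothetical)
t = np.arange(1000) / fs
m_i = np.cos(2 * np.pi * 200 * t)   # in-phase message (200 Hz tone)
m_q = np.sin(2 * np.pi * 300 * t)   # quadrature message (300 Hz tone)

# Upconvert: s(t) = I(t) cos(2 pi fc t) - Q(t) sin(2 pi fc t)
s = m_i * np.cos(2 * np.pi * fc * t) - m_q * np.sin(2 * np.pi * fc * t)

def lowpass(x, cutoff):
    """Crude ideal low-pass filter implemented in the frequency domain."""
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(len(x), 1 / fs) > cutoff] = 0
    return np.fft.irfft(X, len(x))

# Coherent downconversion: multiply by the synchronised carrier, then filter
# out the components that appear at twice the carrier frequency.
i_hat = lowpass(2 * s * np.cos(2 * np.pi * fc * t), 1_000)
q_hat = lowpass(-2 * s * np.sin(2 * np.pi * fc * t), 1_000)
print(np.max(np.abs(i_hat - m_i)), np.max(np.abs(q_hat - m_q)))  # both tiny
```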

Downconversion: Passband to Baseband

To filter out the in-phase and quadrature components, the receiver needs to be coherent: the phase and frequency of the receiver's copy of the carrier must match those of the incoming signal.

The I and Q channels are orthogonal so information can be sent in parallel on these channels.

The passband waveforms and are also orthogonal.

A passband component at twice the carrier frequency also appears because:

The passband signal can be mapped to a pair of real baseband signals meaning passband modulation is two-dimensional.

Passband modulation plotted on the complex plane.

The three equivalent representations of the passband signal are: Rectangular coordinates (I and Q); Polar coordinates (Envelope and Phase); Complex number (Complex envelope).

Each representation of a complex number corresponds to a time domain expression for the passband signal:

Time Domain Expressions for a Passband Signal:

Complex Envelope

All information in a passband signal is contained in its complex envelope. The complex baseband representation can be defined for an arbitrary reference frequency, as long as the signal's spectrum lies within the band around it.

If the reference frequency equals the carrier frequency then:

Hilbert Transform

Hilbert transform of a signal is defined as:

The Hilbert transform is a linear transformation. Its inverse is given by:

The Hilbert transform of $\cos(2\pi f_c t)$ is $\sin(2\pi f_c t)$.

In the Fourier domain:

Pre-Envelope

The pre-envelope of a signal $g(t)$ is: $g_+(t) = g(t) + j\,\hat{g}(t)$

Its Fourier transform is: $G_+(f) = 2G(f)$ for $f > 0$, $G(0)$ at $f = 0$, and $0$ for $f < 0$.

Pre-envelope removes the negative frequency components.
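A minimal numeric sketch of the pre-envelope, built by zeroing negative frequencies in the FFT (the test signal is an arbitrary cosine); it also confirms that the Hilbert transform of a cosine is a sine:

```python
import numpy as np

def pre_envelope(x):
    """Analytic signal x + j*Hilbert(x): zero negative frequencies, double positive."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0           # DC kept once
    h[1:n // 2] = 2.0    # positive frequencies doubled
    h[n // 2] = 1.0      # Nyquist bin (even n) kept once
    return np.fft.ifft(X * h)

t = np.arange(1024) / 1024
x = np.cos(2 * np.pi * 8 * t)
xp = pre_envelope(x)
print(np.allclose(xp.real, x))                           # real part is the signal
print(np.allclose(xp.imag, np.sin(2 * np.pi * 8 * t)))   # imaginary part is its HT
```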

Similarly, the pre-envelope for negative frequencies is: $g_-(t) = g(t) - j\,\hat{g}(t)$

Complex Envelope

Consider arbitrary complex-valued baseband signal , whose spectrum is limited to :

To show that is a real-valued passband signal concentrated around :

Then:

If is a real valued passband signal:

 

Noise

Noise is unwanted waves that disturb the transmission of signals.

Noise comes from:

Both are often stationary and have zero-mean Gaussian distributions.

White Noise

The power spectral density (PSD) of white noise is constant over all frequencies: $S_N(f) = \dfrac{N_0}{2}$

The half factor indicates that half the power is associated with positive frequencies and half with negative.

The term white is analogous to white light, which contains equal amounts of all frequencies within the visible band of the EM spectrum. Whiteness is only defined for stationary noise.

Infinite bandwidth is a purely theoretical assumption; it is valid as long as the noise PSD is flat over the bandwidth of interest.

White Noise vs Gaussian Noise

White noise is not the same as Gaussian noise. Gaussian noise has a Gaussian distribution at any time instant; white noise has uncorrelated samples at different time instants. It is typically assumed that noise is additive white Gaussian noise (AWGN).

Ideal Low Pass White Noise

Suppose white noise is applied to an ideal low-pass filter of bandwidth $B$ such that:

By the Einstein-Wiener-Khintchine relation, the autocorrelation function is: $R_N(\tau) = N_0 B\,\mathrm{sinc}(2B\tau)$

Samples taken $1/(2B)$ apart (the Nyquist spacing) are uncorrelated:
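A quick check of this property, using the autocorrelation $N_0 B\,\mathrm{sinc}(2B\tau)$ of ideal low-pass white noise (the values of $N_0$ and $B$ below are hypothetical):

```python
import numpy as np

N0, B = 2e-9, 1e6   # noise PSD parameter and filter bandwidth (hypothetical)

def R(tau):
    """Autocorrelation of ideal low-pass white noise: N0*B*sinc(2*B*tau)."""
    return N0 * B * np.sinc(2 * B * tau)   # np.sinc is the normalised sinc

print(R(0.0))              # noise power N0*B
k = np.arange(1, 5)
print(R(k / (2 * B)))      # all zero: Nyquist-spaced samples are uncorrelated
```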

Ideal Low-Pass White Noise.

Band Pass Noise

Any communication system that uses carrier modulation will typically have a band-pass filter of bandwidth at the receiver front-end.

Band Pass Noise.

Any noise that enters the receiver will therefore be band-pass in nature. Its spectral magnitude is non-zero only for some band concentrated around the carrier frequency (sometimes called narrowband noise).

Narrow Band Noise.

The band-pass noise $n(t)$ can be written in canonical form: $n(t) = n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

The in-phase and quadrature components $n_c(t)$ and $n_s(t)$ fully represent the band-pass noise.

Band Pass Noise Generation.

Properties of Baseband Noise

Power Spectral Density

Both in-phase and quadrature components have the same PSD:

This follows from:

and hence the in-phase and quadrature components have the same PSD.

Noise Power

For ideally filtered narrowband noise, the PSDs of the in-phase and quadrature components are therefore given by:

Ideally filtered narrow band noise.

The average power in each of the two baseband waveforms is identical to the average power in the bandpass noise waveform.

For ideally filtered narrowband noise, the variances of the in-phase and quadrature components are each:

Phasor Representation

Band-pass noise may be written in the alternative envelope-and-phase form: $n(t) = r(t)\cos\bigl(2\pi f_c t + \phi(t)\bigr)$

The envelope of the noise is: $r(t) = \sqrt{n_c^2(t) + n_s^2(t)}$

The phase of the noise is: $\phi(t) = \tan^{-1}\!\dfrac{n_s(t)}{n_c(t)}$

Phasor representation.

Distribution of Envelope and Phase

It can be shown that if the in-phase and quadrature components are Gaussian-distributed, then the envelope has a Rayleigh distribution, and the phase is uniformly distributed.

If a sinusoid is mixed with noise then the magnitude will have a Rice distribution.
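These distributions are easy to confirm by simulation; this sketch draws Gaussian I/Q noise components (all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0
nc = rng.normal(0, sigma, 500_000)   # in-phase noise component
ns = rng.normal(0, sigma, 500_000)   # quadrature noise component

r = np.hypot(nc, ns)                 # envelope: Rayleigh distributed
phi = np.arctan2(ns, nc)             # phase: uniform on (-pi, pi]

# The Rayleigh mean is sigma*sqrt(pi/2); the uniform phase has zero mean.
print(r.mean(), sigma * np.sqrt(np.pi / 2))
print(np.mean(phi))

# Adding a strong sinusoid of amplitude A to the I channel makes the
# envelope Rician, concentrated near A.
A = 5.0
rice = np.hypot(A + nc, ns)
print(rice.mean())   # close to A for large A/sigma
```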

 

Noise Performance of DSB

Signal to Noise Ratio

A way to quantify the performance of a modulation scheme is to use the signal to noise ratio (SNR) at the output of the receiver:

This is normally expressed in decibels to manage the wide range of power levels in communication systems.

Transmitted Power

Transmitted power is limited by equipment cost, battery life, etc. The higher it is, the greater the received power and the SNR.

When comparing various modulation schemes, the transmitted power should remain the same, and the baseband signal to noise ratio is used as a reference value for comparison.

Baseband Communication System

This doesn't use modulation and is suitable for transmission over wires. The transmitted power is identical to the message power. If there is no attenuation, then the received power equals the transmitted power.

Baseband communication.

The channel noise is additive white noise with PSD $N_0/2$. The noise power at the receiver is the area under the PSD over the message bandwidth $W$, i.e. $N_0 W$. Since the average signal power at the receiver is $P$, the SNR at the output (assuming no propagation loss) is: $\mathrm{SNR}_{\text{baseband}} = \dfrac{P}{N_0 W}$

The SNR can be improved by increasing the numerator term and/or decreasing the denominator terms.

Double Sideband-Suppressed Carrier (DSB-SC) Modulation

The general form of a DSB-SC signal is:

synchronous detection = product detection = coherent detection.

The received noisy signal is:

Coherent detection.

Synchronous Detection for DSB-SC

Multiply the signal with a locally generated carrier $\cos(2\pi f_c t)$:

Use a low pass filter to keep:

Signal power at the receiver output:

The power of the noise :

The SNR at the receiver output is:

The DSB-SC system has the same SNR performance as a baseband system.

 

Noise Performance of SSB and conventional AM

Single Sideband Signals

In a single sideband signal, the message can be recovered by shifting the SSB components left and right by $f_c$ and low pass filtering. This operation recovers the in-phase component; the message is this component.

Single side band modulation.

The Q component is the Hilbert transform of the message:

and have the same power .

The average power is:

Side Bands.

Noise in SSB

The received signal is the SSB signal plus noise. A bandpass filter selects the lower sideband (the filtered noise is still denoted by the lower-sideband noise). Using coherent detection:

After low pass filtering:

The power of the band pass noise is halved compared to DSB, since the noise bandwidth is $W$ rather than $2W$.

Band Pass Noise Power.

Output SNR

The signal power is:

The SNR at the output is:

For a baseband system with the same transmit power , . Therefore the single sideband achieves the same SNR performance as DSB-SC but only requires half the bandwidth.

Standard AM: Synchronous Detection

Before detection, the signal is:

Multiplying with the locally generated carrier $\cos(2\pi f_c t)$:

After the lowpass filter:

Output SNR

The signal power at the receiver output is:

The noise power is:

The SNR at the receiver output is:

The transmitted power is:

Comparison

Comparison of a baseband signal with the same transmitted power:

Thus:

Since part of the transmitted power is spent on the carrier, which carries no message information, the performance of standard AM with synchronous recovery is worse than that of the baseband system.

Non-coherent Receiver and envelope Detection

The envelope (or magnitude of complex envelope) does not depend on carrier phase. If the envelope of a passband signal can be extracted, then carrier sync isn't required.

AM envelope detector.

If the signal is DSB modulated but doesn't have a strong carrier component, then the envelope only carries the message magnitude and envelope detection loses the message sign. If a strong carrier component is used, then an envelope detector and DC block can be used to recover the message.

Envelope detection circuit

AM envelope detector circuit.

The positive carrier cycle causes the capacitor to charge up and reach the value of the envelope, whilst during the negative carrier cycle the capacitor discharges with the RC time constant.

The circuit shouldn't discharge too quickly during negative cycles, yet should react quickly enough to follow variations in the envelope, which depend on the message bandwidth $W$: $\dfrac{1}{f_c} \ll RC \ll \dfrac{1}{W}$
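A quick sanity check of the design rule $1/f_c \ll RC \ll 1/W$, with hypothetical AM broadcast numbers:

```python
# RC time-constant design rule for an envelope detector: 1/fc << RC << 1/W.
# Hypothetical numbers: fc = 1 MHz carrier, W = 5 kHz message bandwidth.
fc, W = 1e6, 5e3
rc = 2e-5                      # e.g. R = 2 kOhm, C = 10 nF

assert 1 / fc < rc < 1 / W     # 1 us << 20 us << 200 us
print(1 / fc, rc, 1 / W)
```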

AM envelope detector circuit operation.

Envelope Detection for Standard AM

The phasor diagram of the signals present at an AM receiver is:

Phasor of AM signals at receiver.

The envelope is:

This equation is complicated. Limiting cases can be used to put noise in an additive form.

Small Noise Case

The first approximation is the small noise case, where the noise envelope is almost always much smaller than the carrier amplitude:

Then:

Thus:

In terms of the baseband SNR:

This is valid for small noise only.

Large Noise Case

The second approximation is the large noise case, where the noise envelope is almost always much larger than the carrier amplitude. Isolating the small quantity:

Large Noise Case: Threshold Effect

From the phasor diagram: , then:

Using for :

The noise is multiplicative, and the noise phase is uniformly distributed over $(0, 2\pi)$.

If there is no term proportional to message, then information is lost.

The Threshold effect states that below some carrier-to-noise ratio level (very low A), performance of envelope detector deteriorates very rapidly (not the case in coherent detection).

Summary

(De-)Modulation Format | Output SNR | Transmitted Power | Baseband Reference SNR | Figure of Merit
AM Coherent Detection | | | |
AM Coherent Detection | | | |
SSB Coherent Detection | | | |
AM Envelope Detection (small noise) | | | |
AM Envelope Detection (large noise) | Poor | | | Poor

($A$: carrier amplitude, $P$: power of message signal, $N_0$: single-sided PSD of noise, $W$: message bandwidth.)

 

Frequency Modulation

FM revision

In FM, additive noise affects the signal only through how much it changes the frequency, whereas in AM the noise corrupts the signal amplitude directly. Therefore FM is less affected by noise.

FM is made up of a carrier waveform: $s(t) = A\cos\theta_i(t)$

Where $\theta_i(t)$ is the instantaneous phase angle.

When:

Since the instantaneous frequency is varied linearly with the message: $f_i(t) = f_c + k_f m(t)$

Where $k_f$ is the frequency sensitivity.

Hence, assuming that :

The modulated signal is:

The envelope is constant, and $s(t)$ is a nonlinear function of the message signal $m(t)$. FM signals are equivalent to PM signals where the modulating signal is:

FM Basics

The peak message amplitude is: $m_p = \max |m(t)|$

The frequency deviation (the maximum departure of the instantaneous frequency from the carrier frequency) is: $\Delta f = k_f m_p$

The deviation ratio is: $\beta = \dfrac{\Delta f}{W}$

Where $W$ is the message bandwidth.

Carson's rule states that the transmission bandwidth of FM is approximately: $B_T = 2(\Delta f + W) = 2(\beta + 1)W$
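Carson's rule as a one-line helper, applied to the standard commercial FM broadcast parameters (75 kHz peak deviation, 15 kHz message bandwidth):

```python
def carson_bandwidth(delta_f: float, w: float) -> float:
    """Carson's rule: B_T = 2*(delta_f + W) = 2*(beta + 1)*W."""
    return 2 * (delta_f + w)

# Commercial FM broadcast: 75 kHz peak deviation, 15 kHz message bandwidth.
print(carson_bandwidth(75e3, 15e3))   # 180000.0 Hz, i.e. 180 kHz
```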

FM Receiver

FM receiver block diagram.

The bandpass filter removes signals outside the FM transmission band.

The limiter removes any amplitude variations, because an FM signal has a constant envelope.

The discriminator recovers the message signal. It is a device whose instantaneous output amplitude is proportional to the instantaneous input frequency.

The lowpass filter removes the out-of-band noise. It has a bandwidth of $W$.

High SNR properties

FM is nonlinear, so superposition doesn't hold.

For high SNR, the noise and message signals are approximately independent of each other.

Therefore noise doesn't affect the power of the message signal at the output and vice versa.

Phase noise in high SNR

 

Phasor diagram of FM carrier and noise signals.

The instantaneous phase of the resultant phasor:

For a large carrier power (large A):

The discriminator output is:

The noise terms are:

The noise phase is uniformly distributed over $(0, 2\pi)$. At high SNR, it can be shown that it remains uniformly distributed at the output. This implies that the noise is additive and independent of the message.

Noise PSD

The output signal power is $k_f^2 P$, where $P$ is the average power of the message signal.

It follows that:

Where:

Therefore:

After the low pass filter, the PSD of the output noise is restricted to the band $|f| \le W$. For wideband FM, $W \ll B_T/2$.

 

(a) PSD of $N_Q(t)$ of $n(t)$. (b) PSD of $n_d(t)$ at discriminator output. (c) PSD of $n_o(t)$ at receiver output.

Noise Power

Average noise power at the receiver output: $\displaystyle\int_{-W}^{W} \frac{N_0 f^2}{A^2}\,df = \frac{2 N_0 W^3}{3 A^2}$

Thus:

Average power at the output of an FM receiver:

As the carrier amplitude $A$ increases, the noise decreases. This is known as the noise quieting effect.

Output SNR

Since the output signal power is $k_f^2 P$ and the output noise power is $2N_0W^3/(3A^2)$, the output SNR is: $\mathrm{SNR}_o = \dfrac{3 A^2 k_f^2 P}{2 N_0 W^3}$

The transmitted power of an FM waveform is $A^2/2$.

This can be a lot higher than AM and is only valid when the carrier power is large compared to noise power.

Bandwidth-SNR tradeoff

In wideband FM, the transmission bandwidth is proportional to $\beta$. At high SNR, an increase in $\beta$ provides a quadratic increase in the output SNR.

 

Pre/de-emphasis for FM and comparison of Analog systems

Threshold Effect

An FM detector exhibits a more pronounced threshold effect than an AM envelope detector. The threshold point occurs roughly where the carrier power is 10 times the noise power, measured by the carrier-to-noise ratio:

Below the threshold the FM receiver breaks down. This can be seen in the phasor diagram.

FM phasor diagram.

As the noise changes randomly, the endpoint of the resultant phasor wanders around the tip of the carrier phasor.

Improving output SNR

The PSD of the noise at the detector output is proportional to the square of the frequency, while the PSD of a message typically decays towards the edges of its band.

PSD of noise and message at output.

To increase :

Pre-emphasis and de-emphasis

Pre-emphasis and de-emphasis block diagram.

The pre-emphasis filter artificially emphasizes the high frequency components of the message before the noise is introduced.

The de-emphasis filter de-emphasizes the high frequency components at the receiver and restores the original PSD of the message.

This usually gives a significant improvement in the output SNR, quantified by the improvement factor below.

Improvement Factor

For an ideal pair of pre/de-emphasis filters:

PSD of the noise at the output of the de-emphasis filter:

Average power of noise with de-emphasis:

The improvement factor is:

Comparison of Analog Systems

The assumptions that are made:

SNR expressions for various modulation schemes without pre/de-emphasis:

 

Comparison of analog systems.

From this, the following conclusions can be made:

Digital Representation of Signals

Block diagram of Digital Communication.

The advantages of digital are:

Sampling and Quantization

The Nyquist rate of a signal is the minimum sampling rate at which the signal can be reconstructed perfectly from its samples. If a signal is band limited to $W$ Hz then it can be reconstructed if the samples are taken at a rate of at least $2W$ Hz.
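The sampling theorem can be sketched by reconstructing a tone from its samples with (truncated) ideal sinc interpolation; the tone, rates, and evaluation times below are arbitrary choices:

```python
import numpy as np

W = 100.0     # assume the signal is band limited to 100 Hz
fs = 250.0    # sample above the Nyquist rate 2W = 200 Hz
n = np.arange(-2000, 2000)
samples = np.cos(2 * np.pi * 60 * n / fs)   # samples of a 60 Hz tone

# Ideal reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n), truncated here.
t = np.array([0.0012, 0.0034, 0.0051])
x_rec = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
err = np.max(np.abs(x_rec - np.cos(2 * np.pi * 60 * t)))
print(err)   # small: the truncated sinc sum recovers the signal between samples
```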

Quantizers

Quantization is the process of transforming the sample amplitude into a discrete amplitude taken from a finite set of possible amplitudes. The more levels, the better the approximation. For memoryless and instantaneous quantization, quantization at time is independent of other samples.

A quantizer.

Amplitudes are decision levels.

The decision region is , where .

At the quantizer output, the decision region is represented by an amplitude .

, are called the representation or reconstruction levels.

Mapping is the quantizer characteristic.

Graphs of quantization.

Quantization noise is the error between the input and output signals:

Quantization Noise.

The variance of the quantization noise is:

Where:

If the step size $\Delta$ is sufficiently small, then it is reasonable to assume that the quantization error is uniformly distributed over $(-\Delta/2, \Delta/2)$:

The quantization noise variance is: $\sigma_Q^2 = \dfrac{\Delta^2}{12}$

SNR

Assuming that an encoded signal has $n$ bits per sample, there are $2^n$ quantization levels.

The power of the signal is $P$, and $m_p$ is the maximum absolute value of the message signal.

Assuming that the message signal fully loads the quantizer: $\Delta = \dfrac{2 m_p}{2^n}$

The SNR at the quantizer output is: $\mathrm{SNR}_O = \dfrac{P}{\sigma_Q^2} = \dfrac{3 P\, 2^{2n}}{m_p^2}$

In decibels: $\mathrm{SNR}_{O,\mathrm{dB}} = 6.02\,n + 4.77 + 10\log_{10}\dfrac{P}{m_p^2}$

Hence, each extra bit adds about 6 dB to the output SNR. Therefore there is a tradeoff between the signal to noise ratio and the bandwidth: more bits per sample improve the SNR but increase the transmitted bit rate.
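The 6 dB-per-bit rule can be verified with a toy uniform quantizer and a full-load sinusoid (a sketch; the mid-rise quantizer and test signal are my own assumptions):

```python
import numpy as np

def quantize(x, n_bits, xmax):
    """Uniform mid-rise quantizer with 2**n_bits levels over [-xmax, xmax]."""
    delta = 2 * xmax / 2 ** n_bits
    idx = np.clip(np.floor(x / delta), -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)
    return (idx + 0.5) * delta

t = np.linspace(0, 1, 100_000, endpoint=False)
x = np.sin(2 * np.pi * 7 * t)          # full-load sinusoid, so P/m_p^2 = 1/2

for n in (8, 10):
    q = quantize(x, n, 1.0)
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean((x - q) ** 2))
    print(n, round(snr, 1))            # close to 6.02*n + 1.76 dB
```

For a full-load sinusoid, $10\log_{10}(P/m_p^2) = -3.01$ dB, which gives the familiar $6.02n + 1.76$ dB figure.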

Pulse Coded Modulation (PCM)

Pulse Coded Modulation.

PCM isn't modulation in the usual sense; it is actually a type of analog-to-digital conversion. In Pulse Coded Modulation, the message is sampled above the Nyquist rate (with a low pass filter applied first to avoid aliasing) and each sample is quantized. The discrete amplitudes are then encoded into binary codewords.

Graph of the Pulse Coded Modulation process.

Uniform Quantization Problems

The problem with uniform quantization is that the output SNR is adversely affected by the peak to average power ratio.

Typically, small signal amplitudes occur more often than large signal amplitudes. This means that the signal doesn't use the entire range of available quantization levels with equal probability. In addition, small amplitudes are represented less accurately than large amplitudes, since they are more susceptible to quantization noise.

Solution: Nonuniform Quantization

Nonuniform quantization uses quantization levels of variable spacing, denser at small signal amplitudes and broader at large amplitudes.

Comparison of Uniform and Non Uniform Quantization.

Solution: Companding

A practical and equivalent solution to nonuniform quantization is companding. The signal is:

  1. Compressed.
  2. Quantized.
  3. Transmitted.
  4. Expanded.

Companding corresponds to pre- and de-emphasis in FM. The message can be predistorted to achieve better performance in the presence of noise, and then the distortion can be removed at the receiver. The exact SNR gain that is obtained depends on the form of compression used. With proper companding, the output SNR can be made insensitive to the peak to average power ratio. Ideally the compression and expansion are exactly the inverse of each other.
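A minimal μ-law compressor/expander pair (using the standard μ = 255), showing that expansion inverts compression and that small amplitudes are stretched before uniform quantization:

```python
import numpy as np

MU = 255.0   # standard mu-law parameter (North America / Japan)

def compress(x):
    """mu-law compressor for x in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """mu-law expander: the exact inverse of compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.linspace(-1, 1, 1001)
print(np.max(np.abs(expand(compress(x)) - x)))   # ~0: expansion inverts compression
print(compress(np.array([0.01, 0.1, 1.0])))      # small amplitudes are stretched
```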

Compander Standards: μ-law vs A-law

(a) $\mu$-law used in North America and Japan, (b) A-law used in most other countries of the world. Typical values in practice: $\mu = 255$, $A = 87.6$.

Line Coding

The bits of PCM, DPCM, etc. need to be converted into electrical signals. Line coding encodes the bit stream for transmission through a line or cable. Line coding is used for communication between a computer CPU and peripherals, or over Ethernet.

Desired Properties of a Line code

Types of Line codes

(a) Unipolar nonreturn-to-zero signalling. (b) Polar nonreturn-to-zero signalling. (c) Unipolar Return-to-zero signalling. (d) Bipolar return-to-zero signalling. (e) Manchester code.
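As an illustration of one of these codes, a Manchester encoder sketch (I use the IEEE 802.3 convention here; the opposite convention also exists):

```python
import numpy as np

def manchester(bits):
    """Manchester encoding, IEEE 802.3 convention: 0 -> high-low, 1 -> low-high.
    (The opposite mapping is also used in practice; the choice is an assumption.)"""
    out = []
    for b in bits:
        out += [-1, +1] if b else [+1, -1]
    return np.array(out)

code = manchester([1, 0, 1, 1])
print(code)          # [-1  1  1 -1 -1  1 -1  1]
print(code.mean())   # 0.0 -> no DC component, a desired line-code property
```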

 

Baseband Digital Transmission

Whilst analog communication systems aim to reproduce transmitted waveforms accurately, digital systems just need to identify the transmitted symbol correctly. Therefore the quality of analog systems is measured with SNR, whilst for digital systems the probability of error is used instead.

Matched Filter

A matched filter is a receiver filter that knows the shape of the transmitted pulse and uses it to minimise the effect of noise.

Matched Filter

The filter is linear so:

The instantaneous power of the signal component at needs to be as large as possible compared to the noise component . (Peak pulse SNR):

Derivation

Noise:

Signal:

For the that maximises :

Schwarz's inequality states that for two energy signals and :

Equality only holds if for an arbitrary constant .

Letting and :

This occurs if:

Hence:

Properties

Impulse response is:

is the symbol period, whilst is the transmitter pulse shape, and is the gain. This is a time reversed shifted version of .

The matched filter maximises peak pulse SNR:

For a matched filter for a rectangular pulse shape, the impulse response is a rectangular pulse of the same duration. Convolving the input with a rectangular pulse of duration seconds is equivalent to integrating for seconds, then sampling at the end of the symbol period before resetting the integration for the next pulse. A circuit that does this consists of an integrator followed by a switch, and is known as an integrate-and-dump circuit.

Matched Filter for a Rectangular Pulse
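The integrate-and-dump idea can be sketched in discrete time (a toy illustration; the amplitude, number of samples per symbol, and noise values are all made up):

```python
# Toy discrete-time integrate-and-dump receiver for a rectangular pulse.
# The matched filter for a rectangular pulse is itself rectangular, so
# filtering and sampling at the symbol instant reduces to summing the
# received samples over one symbol period, then resetting ("dumping").
A, N = 1.0, 8                       # pulse amplitude and samples per symbol
noise = (0.1, -0.2, 0.05, 0.0, -0.1, 0.2, -0.05, 0.0)
received = [A + e for e in noise]   # noisy rectangular pulse
decision_statistic = sum(received)  # integrate over the symbol period
# After sampling, the integrator is reset before the next pulse arrives;
# averaging over the symbol period is what suppresses the noise.
```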

Binary Baseband Communication System

A binary communication system compares a sample of the output of a matched filter with a threshold value and outputs a binary digit depending upon whether the sample is greater or less than the threshold.

Binary Baseband Communication System circuit.

It is assumed that the channel is an AWGN channel with double-sided noise PSD of , and that the matched filter is rectangular. The effect of additive noise needs to be closely monitored since bit errors can occur.

Noise Distribution

After the matched filter, the predetection signal is:

Where the first term is the binary value and the second term is the noise . This is zero-mean additive white Gaussian noise with variance:

is a Gaussian RV with pdf:

Decision

If a symbol were transmitted: with pdf .

If a symbol were transmitted: with pdf .

Errors

The probability that was transmitted but was decided is (case i), whilst the opposite is (case ii).

Probability density functions for binary data transmission in noise: (a) symbol 0 transmitted, and (b) symbol 1 transmitted. Here $\lambda = \frac{A}{2}$.

For case i:

Where:

For case ii:

Where

The total error probability is:

Then must be chosen so that is at a minimum:

Derivation

Use Leibniz's rule:

Therefore:

Optimum Threshold

This would be where which when subbed into:

Calculation of

A new variable of integration is defined:

Then:

can then be expressed in terms of the Q-function:

Then:

If the energy of a pulse is and a pulse is only transmitted half the time (on average), then the average energy per bit is:

And the noise variance is:

The probability of error in terms of energy per bit and noise PSD is:

Q-function boundaries

For large :

For small :
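The Q-function and its bounds are easy to evaluate numerically; a short sketch using the standard identity $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Q_upper(x):
    """Chernoff-type upper bound Q(x) <= (1/2) exp(-x^2 / 2), valid for x >= 0."""
    return 0.5 * math.exp(-x * x / 2)

# The bound is loose near x = 0 but its exponent is tight for large x:
vals = [(x, Q(x), Q_upper(x)) for x in (0.0, 1.0, 3.0)]
```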

 

Digital Modulation

Digital Passband Modulation

In baseband, a linearly modulated waveform is given by:

Where is the sequence of symbols to be transmitted, and is the modulating pulse. In digital passband modulation:

The passband signal can be written as:

Both the I and Q components can be modulated. If only the I component is modulated, the scheme is BPSK (binary phase shift keying); if both are modulated, it is QPSK (quadrature phase shift keying).

$s(t)$ for different values of $b_I[n]$ and $b_Q[n]$

(a) Amplitude-shift keying (ASK), (b) Phase-shift keying (PSK), (c) Frequency-shift keying

Signal constellations.

Demodulation

Digital signal transmission.

To demodulate a digital signal, coherent detection can be used:

Or noncoherent demodulation such as envelope detection can be used. This, however, makes no explicit effort to estimate the phase.

ASK Coherent Demodulation

In an ASK signal:

Pre-detection signal is:

After multiplication with :

After the low pass filtering:

Bit Error Rate (BER)

The PSD of in-phase noise component is , double the PSD of the original band-pass noise . Assuming equi-probable transmission of 0s and 1s, then the decision threshold must be and the probability of error is given by:

Transmission energy for a pulse is:

The average energy per bit is:

The noise variance is:

The probability of error in terms of energy per bit and noise PSD is therefore:

PSK Analysis

In a PSK signal:

Coherent detection is used to get the detection signal:

 

The pdfs for PSK for equiprobable 0s and 1s in noise.

The conditional error probabilities are:

In the first, setting and when , :

In the second, setting and when , :

Bit Error Rate

Changing the variable of integration to and when , Then:

The average energy per bit is:

The noise variance is:

The probability of error in terms of energy per bit and noise PSD:

Frequency Shift Keying

For FSK:

To recover this signal, two sets of coherent detectors are used operating at frequencies and .

Coherent FSK demodulation. The two BPF's are non-overlapping in frequency spectrum.

Each branch acts as an ASK coherent detector: the output of the LPF is if the symbol is present, and just the noise if the signal is not present. Each noise term has identical statistics to .

The output if a symbol 1 was transmitted:

The output if a symbol 0 was transmitted:

Bit Error Rate for FSK

The detection threshold should be 0. The difference from PSK is that the noise term is now . The noises in the two channels are independent because their spectra are non-overlapping. The variances therefore add, meaning the noise variance doubles.

The average energy per bit is:

The noise variance is:

The probability of error in terms of energy per bit and noise PSD is:

Sum of two RVs

The variance for where is a random variable with variance :

For independent variables:

For zero-mean random variables:

So:

Comparison of PSK, ASK, and FSK.

 

Non coherent Demodulation

Coherent demodulation assumes perfect synchronization, however accurate phase synchronization may be difficult in a dynamic channel due to varying propagation delays, frequency drift, oscillator instability, and noise. In this case non coherent demodulation must be used. The phase is assumed to be uniformly distributed on .

ASK non coherent demodulation

Non coherent demodulation of ASK.

Output of the BPF:

Since:

The envelope is:

When is sent the envelope (just bandpass noise) has Rayleigh distribution:

When is sent the envelope (signal and bandpass noise) has Rician distribution:

Rayleigh Distribution

has a Rayleigh distribution if and are independent Gaussians of the form :

 

Rayleigh Distribution.

Derivation

Change it to polar form:

Consider a small area :

Hence:

The pdf of is:

The pdf of is:

Rician Distribution

If has nonzero mean , has Rician distribution:

Where:

This is the zeroth-order modified Bessel function of the first kind.

Rician Distribution.

Derivation

Hence:

The pdf of R is:

Error Probability

The threshold is , and the error probability is dominated by the symbol and is given by:

The final expression is:

Non coherent demodulation results in some performance degradation compared with coherent demodulation. However, for large SNR, the performances are similar.

Non coherent demodulation of FSK

Non coherent demodulation of FSK.

When a symbol is sent the BPF outputs are:

The first has a Rayleigh distribution and the second has a Rice distribution:

and are independent.

Error occurs if Rice < Rayleigh.

The integrand is Rician density:

Differential PSK

It is impossible to demodulate PSK with an envelope detector, since PSK signals have the same frequency and amplitude. Instead, PSK can be demodulated differentially, where the phase reference is provided by a delayed version of the signal from the previous interval.

Differential encoding for DPSK.

Differential Demodulation

Differential demodulation.

The error probability is:

 

Illustration of DPSK.

Summary and Comparison

Scheme | Bit Error Rate
Coherent ASK
Coherent FSK
Coherent PSK
Non Coherent ASK
Non Coherent FSK
DPSK
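The BER expressions in the table above did not survive extraction. As a reconstruction, the widely quoted textbook expressions (with $E_b$ the average energy per bit and $N_0/2$ the double-sided noise PSD) are:

```latex
\begin{aligned}
\text{Coherent ASK:} \quad & P_e = Q\!\left(\sqrt{E_b/N_0}\right)\\
\text{Coherent FSK:} \quad & P_e = Q\!\left(\sqrt{E_b/N_0}\right)\\
\text{Coherent PSK:} \quad & P_e = Q\!\left(\sqrt{2E_b/N_0}\right)\\
\text{Non coherent ASK:} \quad & P_e \approx \tfrac{1}{2}\,e^{-E_b/2N_0}\\
\text{Non coherent FSK:} \quad & P_e = \tfrac{1}{2}\,e^{-E_b/2N_0}\\
\text{DPSK:} \quad & P_e = \tfrac{1}{2}\,e^{-E_b/N_0}
\end{aligned}
```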

Comparison of demodulation types.

 

Entropy and Data Compression

Digital communication systems model.

Amount of information in a symbol with probability :

Properties:

Log base 2 is used so that information is measured in bits.

Suppose there is an information source emitting a sequence of symbols from a finite alphabet:

This is a discrete memoryless source: The successive symbols are statistically independent and identically distributed. It is assumed that each symbol has a probability:

If a symbol has occurred this corresponds to bits of information:

The expected value of over the source alphabet is:

Source entropy is the average amount of information per source symbol with units, bits/symbol:

Entropy is the amount of uncertainty before the source output is observed. Equivalently, it is the average number of bits of information per symbol gained by learning the source realization. In general, entropy is maximized when the source is uniformly distributed over its alphabet. Bits are wasted if more bits than the entropy are used per symbol.
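A short numeric check of these properties (the example distributions are made up):

```python
import math

def entropy(probs):
    """Source entropy H = -sum(p * log2 p), in bits/symbol (0 log 0 := 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol source attains the maximum log2(4) = 2 bits/symbol;
# any skew in the distribution reduces the entropy:
H_uniform = entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits/symbol
H_skewed = entropy([0.5, 0.25, 0.125, 0.125])  # 1.75 bits/symbol
```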

Source Coding Theory

Source encoding: concerned with minimizing the actual number of source bits that are transmitted to the user.

Channel encoding: concerned with introducing redundant bits to enable the receiver to detect and possibly correct errors that are introduced by the channel.

Average Codeword Length

The average codeword length is:

Where:

To reduce the average codeword length, symbols that occur often should be encoded with shorter codewords, whilst symbols that occur rarely may be encoded using longer codewords.

Minimum Codeword Length

In a system with symbols that are equally likely, the probability of each symbol to occur is:

One needs bits to represent the symbols.

The general case

Source Coding Theorem

The number of bits required to represent a typical sequence :

The average length for one symbol is:

Given a discrete memoryless source of entropy , average codeword length for any source coding scheme is bounded by :

Huffman Coding

Huffman coding is an efficient source coding algorithm. It yields the shortest average codeword length. The basic idea is to choose codeword lengths so that more-probable sequences have shorter codewords.

Huffman Code construction:

An example of Huffman Coding.

The average codeword length in this example is:

Huffman code is not unique (equal probabilities can be reordered) and is the most efficient prefix code.

A clearer example of Huffman Coding using a tree.

A drawback of Huffman coding is that it requires knowledge of a probabilistic model, which is not always available.
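The construction above can be sketched with a priority queue (a minimal implementation; the example probabilities are made up, and tie-breaking order is one of several valid choices):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability}; returns {symbol: bits}."""
    tiebreak = count()  # heapq needs a total order when probabilities tie
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # merge the two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(w) for s, w in code.items())
```

For these dyadic probabilities the average codeword length equals the source entropy (1.75 bits/symbol), the best any prefix code can do.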

 

Channel Capacity

Introduction

In a discrete memoryless channel, the channel output is a noisy version of channel input .

The input alphabet is:

The output alphabet is:

Transition probabilities are:

Assume that the input is selected based on:

The joint probability distribution is given by:

Marginal distribution of the channel output is found as:

Conditional Entropy

This is a random variable that takes values on:

Where the probabilities are . The mean entropy is given by:

This is the amount of uncertainty after observing the channel output.

Mutual Information

Difference is the uncertainty resolved by observing channel output. Define the mutual information as:

Which results in:

Mutual information is:

Channel Coding Theorem

Capacity of a discrete memoryless channel is the maximum mutual information between the input and output, where the maximization is over all possible input probability distributions.

If the transmission rate , then there exists a coding scheme such that bits per channel use can be transmitted over the channel with an arbitrarily small probability of error.

Conversely, error probability is always bounded above zero when the transmission rate is above the capacity.

Binary Symmetric Channel

Binary Symmetric Channel.

Where:

Where . Then:

It is easy to see that this can be achieved by setting:

 

Binary Symmetric Channel graph.
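Numerically, the standard BSC capacity result $C = 1 - H_2(p)$, achieved by equiprobable inputs, can be sketched as:

```python
import math

def binary_entropy(p):
    """H2(p) = -p log2 p - (1 - p) log2 (1 - p), with H2(0) = H2(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity C = 1 - H2(p) of a BSC with crossover probability p."""
    return 1.0 - binary_entropy(p)

# A noiseless channel (p = 0) carries 1 bit per use; at p = 1/2 the output
# is independent of the input and the capacity collapses to zero.
```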

Additive White Gaussian Noise (AWGN) channel

Capacity of an additive white Gaussian noise (AWGN) channel:

Where:

This rate can be achieved by:

Shannon Limit

There is a trade-off between power and bandwidth. To achieve the same target capacity , one may increase power and decrease bandwidth, or decrease power and increase bandwidth.

The minimum power to send 1 bps (where C = 1):

 

Shannon Limit.
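For reference, the limit follows from the AWGN capacity formula as bandwidth grows (a sketch of the standard derivation, with $P = E_b C$):

```latex
C = B \log_2\!\left(1 + \frac{P}{N_0 B}\right)
\quad\Longrightarrow\quad
\frac{E_b}{N_0} = \frac{2^{C/B} - 1}{C/B}
\;\xrightarrow{\;C/B \to 0\;}\; \ln 2 \approx -1.59\ \text{dB}.
```

No reliable communication is possible below this $E_b/N_0$, however much bandwidth is available.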

Noise and Errors

Noise can corrupt the information we wish to transmit, so its effects should be avoided where possible. Different systems generally require different levels of protection against errors, and consequently a number of different techniques have been developed to detect and correct different types and numbers of errors.

Channel Model

The effect of noise can be measured in different ways. The most common is to specify an error probability, . If the error probability is small and information is fairly fault tolerant, it is possible to use simple methods to detect errors.

Block Codes

An important class of codes that can detect and correct some errors are block codes:

Linear Block Codes

An binary linear block code takes a block of bits of source data and encodes them using bits.

Generator Matrix

To construct a linear block code we define a matrix, the generator matrix , that converts blocks of source symbols into longer blocks corresponding to code words.

is a matrix ( rows, columns), that takes a source block (a binary vector of length ), to a code word (a binary vector of length ):

 

How the generator matrix works.

Hamming Distance

Hamming weight of a binary vector (written as ), is the number of non-zero elements it contains.

Hamming Distance between two binary vectors, and , is written , and is equal to the Hamming weight of their (Boolean) sum.
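These definitions translate directly into code (the example vectors are made up):

```python
def hamming_weight(v):
    """Number of non-zero elements of a binary vector."""
    return sum(v)

def hamming_distance(u, v):
    """Hamming weight of the element-wise Boolean sum (XOR) of u and v."""
    return sum(a ^ b for a, b in zip(u, v))

w = hamming_weight([1, 0, 1, 1])                  # 3 non-zero elements
d = hamming_distance([1, 0, 1, 1], [1, 1, 0, 1])  # vectors differ in 2 places
```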

Error Detection and Correction

Error Detection

To determine the number of errors a particular code can detect and correct, the minimum Hamming distance between any two code words needs to be analysed. From linearity the zero vector must be a code word.

If the minimum distance between any two code words is defined as:

Where is the set of code words.

The number of errors that can be detected is then , since errors can turn an input code word into a different valid code word. Fewer than errors will turn an input code word into a vector that is not a valid code word.

Error Correction

The number of errors that can be corrected is the number of errors that can be detected, divided by two and rounded down to the nearest integer, since any received vector with fewer than this number of errors will be 'nearer' to the transmitted code word than to any other.

The (7,4) Hamming code has $d_{\min} = 3$. It can detect one or two bit errors, and correct any single bit error.

Error Correction.
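A sketch of the (7,4) Hamming code using one common systematic generator matrix $G = [I_4 \mid P]$ (other equivalent choices of $P$ exist):

```python
from itertools import product

# Systematic generator matrix of a (7,4) Hamming code: 4 data bits
# followed by 3 parity bits (one common choice of parity columns).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u):
    """Codeword c = uG over GF(2) for a 4-bit source block u."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# For a linear code, d_min equals the minimum Hamming weight over all
# non-zero codewords; enumerating the 15 non-zero source blocks gives 3.
d_min = min(sum(encode(u)) for u in product([0, 1], repeat=4) if any(u))
```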